60 research outputs found

    Approximation Algorithms for Stochastic Inventory Control Models

    Full text link

    Solving order constraints in logarithmic space.

    Get PDF
    We combine methods of order theory, finite model theory, and universal algebra to study, within the constraint satisfaction framework, the complexity of some well-known combinatorial problems connected with a finite poset. We identify conditions on a poset that guarantee solvability of the problems in (deterministic, symmetric, or non-deterministic) logarithmic space. Using order constraints as an example, we study how a certain algebraic invariance property is related to solvability of a constraint satisfaction problem in non-deterministic logarithmic space.

    On Sparsification for Computing Treewidth

    Full text link
    We investigate whether an n-vertex instance (G,k) of Treewidth, asking whether the graph G has treewidth at most k, can efficiently be made sparse without changing its answer. By giving a special form of OR-cross-composition, we prove that this is unlikely: if there is an e > 0 and a polynomial-time algorithm that reduces n-vertex Treewidth instances to equivalent instances, of an arbitrary problem, with O(n^{2-e}) bits, then NP is in coNP/poly and the polynomial hierarchy collapses to its third level. Our sparsification lower bound has implications for structural parameterizations of Treewidth: parameterizations by measures that do not exceed the vertex count cannot have kernels with O(k^{2-e}) bits for any e > 0, unless NP is in coNP/poly. Motivated by the question of determining the optimal kernel size for Treewidth parameterized by vertex cover, we improve the O(k^3)-vertex kernel from Bodlaender et al. (STACS 2011) to a kernel with O(k^2) vertices. Our improved kernel is based on a novel form of treewidth-invariant set. We use the q-expansion lemma of Fomin et al. (STACS 2011) to find such sets efficiently in graphs whose vertex count is superquadratic in their vertex cover number. Comment: 21 pages. Full version of the extended abstract presented at IPEC 2013.
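
    As background (not part of the paper), the question asked of an instance (G, k) can be checked by brute force on tiny graphs, using the fact that the treewidth of G equals the minimum, over all vertex orderings, of the maximum degree encountered when vertices are eliminated in that order. A minimal Python sketch under an assumed dictionary-of-sets graph representation:

```python
from itertools import permutations

def treewidth_at_most(adj, k):
    """Brute-force check of tw(G) <= k via elimination orderings (tiny graphs only)."""
    vertices = list(adj)
    for order in permutations(vertices):
        g = {v: set(adj[v]) for v in adj}   # working copy of the adjacency sets
        width = 0
        for v in order:
            nbrs = g[v]
            width = max(width, len(nbrs))
            # eliminate v: make its current neighborhood a clique, then remove v
            for u in nbrs:
                g[u] |= nbrs - {u}
                g[u].discard(v)
            del g[v]
        if width <= k:
            return True
    return False

# A 4-cycle has treewidth 2.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(treewidth_at_most(cycle, 2), treewidth_at_most(cycle, 1))  # True False
```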

    Kernel Bounds for Structural Parameterizations of Pathwidth

    Full text link
    Assuming the AND-distillation conjecture, the Pathwidth problem of determining whether a given graph G has pathwidth at most k admits no polynomial kernelization with respect to k. The present work studies the existence of polynomial kernels for Pathwidth with respect to other, structural, parameters. Our main result is that, unless NP is in coNP/poly, Pathwidth admits no polynomial kernelization even when parameterized by the vertex deletion distance to a clique, by giving a cross-composition from Cutwidth. The cross-composition also works for Treewidth, improving over previous lower bounds by the present authors. For Pathwidth, our result rules out polynomial kernels with respect to the distance to various classes of polynomial-time solvable inputs, like interval or cluster graphs. This leads to the question of whether there are nontrivial structural parameters for which Pathwidth does admit a polynomial kernelization. To answer this, we give a collection of graph reduction rules that are safe for Pathwidth. We analyze the success of these rules and obtain polynomial kernelizations with respect to the following parameters: the size of a vertex cover of the graph, the vertex deletion distance to a graph where each connected component is a star, and the vertex deletion distance to a graph where each connected component has at most c vertices. Comment: This paper contains the proofs omitted from the extended abstract published in the proceedings of Algorithm Theory - SWAT 2012 - 13th Scandinavian Symposium and Workshops, Helsinki, Finland, July 4-6, 2012.

    On the stable degree of graphs

    No full text
    We define the stable degree s(G) of a graph G by s(G) = min_U max_{v ∈ U} d(v), where the minimum is taken over all maximal independent sets U of G. For this new parameter we prove the following. Deciding whether a graph has stable degree at most k is NP-complete for every fixed k ≄ 3, and the stable degree is hard to approximate. For asteroidal triple-free graphs and graphs of bounded asteroidal number the stable degree can be computed in polynomial time. For graphs in these classes the treewidth is bounded from below and above in terms of the stable degree.
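
    To make the definition concrete, a brute-force Python sketch (not from the paper) that computes s(G) for a small graph by enumerating all maximal independent sets; the adjacency-dictionary representation and the function names are illustrative assumptions:

```python
from itertools import combinations

def is_independent(adj, subset):
    """No two vertices of `subset` are adjacent."""
    return all(v not in adj[u] for u, v in combinations(subset, 2))

def maximal_independent_sets(adj):
    """Enumerate all maximal independent sets by brute force (small graphs only)."""
    vertices = list(adj)
    for r in range(1, len(vertices) + 1):
        for subset in combinations(vertices, r):
            if not is_independent(adj, subset):
                continue
            # maximal: every outside vertex has a neighbor inside the set
            if all(any(w in adj[v] for v in subset) for w in vertices if w not in subset):
                yield subset

def stable_degree(adj):
    """s(G) = min over maximal independent sets U of max_{v in U} d(v)."""
    return min(max(len(adj[v]) for v in U) for U in maximal_independent_sets(adj))

# Path 0-1-2-3: the maximal independent set {0, 3} gives s(G) = 1.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(stable_degree(path))  # 1
```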

    Fully Dynamic Maintenance of Arc-Flags in Road Networks

    Get PDF
    The problem of finding best routes in road networks can be solved by applying Dijkstra's shortest-paths algorithm. Unfortunately, road networks deriving from real-world applications are huge, yielding impractically long times to compute shortest paths. For this reason, considerable research effort has been devoted to accelerating Dijkstra's algorithm on road networks. These efforts have led to the development of a number of speed-up techniques, such as Arc-Flags, whose aim is to compute additional data in a preprocessing phase in order to accelerate shortest-path queries in an on-line phase. The main drawback of most of these techniques is that they do not work well in dynamic scenarios. In this paper we propose a new algorithm to update the Arc-Flags of a graph subject to edge weight decrease operations. To assess the practical performance of the new algorithm we experimentally analyze it, along with a previously known algorithm for edge weight increase operations, on real-world road networks subject to fully dynamic sequences of operations. Our experiments show a significant speed-up in the updating phase of the Arc-Flags, at the cost of a small space and time overhead in the preprocessing phase.
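
    For context, a minimal sketch of the query phase, assuming a precomputed partition of the vertices into regions and an arc-flag table; the data layout and names below are illustrative and are not the structures used in the paper:

```python
import heapq

def dijkstra_with_arc_flags(graph, flags, region_of, source, target):
    """
    Shortest source-target distance with arc-flag pruning.

    graph:     dict u -> iterable of (v, weight) arcs
    flags:     dict (u, v) -> set of regions for which arc (u, v) may lie on a
               shortest path (the precomputed arc-flags)
    region_of: dict v -> region identifier of vertex v
    """
    target_region = region_of[target]
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry
        for v, w in graph.get(u, ()):
            if target_region not in flags.get((u, v), ()):
                continue                      # prune: flag for the target's region not set
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# With every flag set, the pruning test never fires and this degrades to plain Dijkstra.
graph = {0: [(1, 2.0)], 1: [(2, 3.0)], 2: []}
flags = {(0, 1): {"A", "B"}, (1, 2): {"A", "B"}}
region_of = {0: "A", 1: "A", 2: "B"}
print(dijkstra_with_arc_flags(graph, flags, region_of, 0, 2))  # 5.0
```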

    Cluster Editing: Kernelization based on Edge Cuts

    Full text link
    Kernelization algorithms for the cluster editing problem have been a popular topic in recent research on parameterized computation. Thus far most kernelization algorithms for this problem are based on the concept of critical cliques. In this paper, we present new observations and new techniques for the study of kernelization algorithms for the cluster editing problem. Our techniques are based on the study of the relationship between cluster editing and graph edge-cuts. As an application, we present an O(n^2)-time algorithm that constructs a 2k kernel for the weighted version of the cluster editing problem. Our result matches the best known kernel size for the unweighted version of the problem, and significantly improves the previous best kernel of quadratic size for the weighted version.
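
    As a reference point (and not the paper's kernelization), the quantity bounded by the parameter k can be computed by brute force on tiny instances: the minimum number of edge insertions and deletions that turn the graph into a disjoint union of cliques. A hedged Python sketch:

```python
from itertools import combinations

def partitions(items):
    """Yield all partitions of a list into non-empty blocks (exponential; tiny inputs only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [block + [first]] + smaller[i + 1:]
        yield smaller + [[first]]

def cluster_editing_cost(vertices, edges):
    """Minimum number of edge edits turning (V, E) into a disjoint union of cliques."""
    edge_set = {frozenset(e) for e in edges}
    best = float("inf")
    for part in partitions(list(vertices)):
        cluster_of = {v: i for i, block in enumerate(part) for v in block}
        cost = 0
        for u, v in combinations(vertices, 2):
            same_cluster = cluster_of[u] == cluster_of[v]
            has_edge = frozenset((u, v)) in edge_set
            if same_cluster and not has_edge:
                cost += 1      # missing edge inside a cluster must be inserted
            elif not same_cluster and has_edge:
                cost += 1      # edge between clusters must be deleted
        best = min(best, cost)
    return best

# A path on three vertices needs exactly one edit.
print(cluster_editing_cost([1, 2, 3], [(1, 2), (2, 3)]))  # 1
```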

    On the Differences between “Practical” and “Applied”

    No full text

    Approximation algorithms for stochastic inventory control models

    No full text
    In this paper we address the long-standing problem of finding computationally efficient and provably good inventory control policies in supply chains with correlated and nonstationary (time-dependent) stochastic demands. This problem arises in many domains and has many practical applications such as dynamic forecast updates (for some applications see Erkip et al. 1990 and Lee et al. 1999). We consider two classical models, the periodic-review stochastic inventory control problem and the stochastic lot-sizing problem with correlated and nonstationary demands. Here the correlation is inter-temporal, i.e., what we observe in the current period changes our forecast for the demand in future periods. We provide what we believe to be the first computationally efficient policies with constant worst-case performance guarantees; that is, there exists a constant C such that, for any given joint distribution of the demands, the expected cost of the policy is guaranteed to be within C times the expected cost of an optimal policy. More specifically, we provide a worst-case performance guarantee of 2 for the periodic-review stochastic inventory control problem, and a performance guarantee of 3 for the stochastic lot-sizing problem.
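
    Purely as an illustration of the model, and not of the policies analyzed in the paper, the sketch below estimates by Monte Carlo the expected holding-plus-backlogging cost of a simple order-up-to (base-stock) policy under a demand process whose forecast is updated from observed demand; the demand model and every parameter are assumptions:

```python
import random

def simulate_base_stock(base_stock, periods, holding_cost, backlog_cost,
                        n_samples=10_000, seed=0):
    """
    Monte Carlo estimate of the expected cost of a stationary order-up-to policy:
    each period we raise the inventory position to `base_stock`, observe demand,
    and pay holding cost on leftover stock or backlogging cost on unmet demand.
    Demands are correlated across periods through a simple forecast update.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        inventory = 0.0
        forecast = 10.0                      # assumed demand forecast for period 1
        cost = 0.0
        for _ in range(periods):
            inventory = max(inventory, base_stock)          # order up to base stock
            demand = max(0.0, rng.gauss(forecast, 3.0))
            inventory -= demand
            cost += holding_cost * inventory if inventory >= 0 else backlog_cost * -inventory
            forecast = 0.5 * forecast + 0.5 * demand        # inter-temporal correlation
        total += cost
    return total / n_samples

print(simulate_base_stock(base_stock=12.0, periods=5, holding_cost=1.0, backlog_cost=4.0))
```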

    Approximation in stochastic scheduling: the power of LP-based priority policies

    Get PDF
    We consider the problem of minimizing the total weighted completion time of a set of jobs with individual release dates that have to be scheduled on identical parallel machines. Job processing times are not known in advance; they are realized on-line according to given probability distributions. The aim is to find a scheduling policy that minimizes the objective in expectation. Motivated by the success of LP-based approaches to deterministic scheduling, we present a polyhedral relaxation of the performance space of stochastic parallel machine scheduling. This relaxation extends earlier relaxations that have been used, among others, by Hall et al. [1997] in the deterministic setting. We then derive constant performance guarantees for priority policies which are guided by optimum LP solutions, and thereby generalize previous results from deterministic scheduling. In the absence of release dates, the LP-based analysis also yields an additive performance guarantee for the WSEPT rule, which implies both a worst-case performance ratio and a result on its asymptotic optimality, thus complementing previous work by Weiss [1990]. The corresponding LP lower bound generalizes a previous lower bound from deterministic scheduling due to Eastman et al. [1964], and exhibits a relation between parallel machine problems and corresponding problems with only one fast single machine. Finally, we show that all employed LPs can be solved in polynomial time by purely combinatorial algorithms.
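
    To make the WSEPT rule concrete, a small Python sketch (assumptions: no release dates, exponentially distributed processing times, made-up job data) that list-schedules jobs in non-increasing order of weight over expected processing time and evaluates the weighted completion time of one sampled realization:

```python
import heapq
import random

def wsept_schedule(jobs, num_machines, seed=0):
    """
    WSEPT list scheduling without release dates: sort jobs by weight / expected
    processing time (non-increasing) and assign each job to the machine that
    becomes available first. Processing times are sampled when a job starts.

    jobs: list of (weight, expected_processing_time) pairs.
    Returns the total weighted completion time of the sampled realization.
    """
    rng = random.Random(seed)
    order = sorted(range(len(jobs)), key=lambda j: -jobs[j][0] / jobs[j][1])
    machines = [0.0] * num_machines          # next point in time each machine is free
    heapq.heapify(machines)
    total = 0.0
    for j in order:
        weight, expected = jobs[j]
        start = heapq.heappop(machines)      # earliest available machine
        processing = rng.expovariate(1.0 / expected)   # assumed exponential distribution
        completion = start + processing
        heapq.heappush(machines, completion)
        total += weight * completion
    return total

# Four jobs given as (weight, expected processing time), scheduled on two machines.
jobs = [(3.0, 2.0), (1.0, 4.0), (2.0, 1.0), (5.0, 5.0)]
print(wsept_schedule(jobs, num_machines=2))
```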
